PHILOSOPHICAL PRINCIPLES OF ARTIFICIAL INTELLIGENCE


1. One of the main difficulties philosophers get into when trying
to make concepts precise is the hard case, often one that no one
has previously thought of. The distinctions such a case suggests
seem pettifogging, and they are often unsuccessful, because someone
comes up with an even harder case. The solution is to sidestep the
hard cases by treating only the easy ones. This is best done by
trying to say how an artificial intelligence could be provided
with knowledge about (say) causality sufficient to behave
intelligently in some class of cases. We then avoid trying to
make a definition that fits all possible cases.

2. This is an example of the "third person principle". Instead of
asking "How do I know?" or "How do we know?", we ask "How can we
make it know?", and in the latter task we do not doubt any of our
own knowledge.

3. The "principle of philosophical relativity" suggests that I am
nothing special. I will account for my own experience only in ways
that apply to others too. In fact, humans, with their particular
collections of senses, are nothing special. What happens to be
directly perceivable by humans is not especially likely to be
philosophically or physically fundamental. The physicists
understand this except when they try to be philosophers.

4. Counterfactual sentences are relative to theories. This is best
exemplified by the automaton model: within a given automaton model
of a situation, a counterfactual like "had the input been
different, the state would now be s" has a definite truth value,
whereas asked of the world directly it has none.

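To make this concrete, here is a minimal sketch (not from the
memo; the transition functions, the states, and the counterfactual
queried are all invented for illustration). A counterfactual is
evaluated by re-running a deterministic model with the antecedent
imposed, and its truth value can change when the theory changes:

    def run(transition, state, inputs):
        # Run a deterministic automaton from `state` on a list of inputs.
        for x in inputs:
            state = transition(state, x)
        return state

    # An invented little world: state = (position, fuel).
    def world(state, action):
        pos, fuel = state
        if action == "move" and fuel > 0:
            return (pos + 1, fuel - 1)
        return (pos, fuel)                  # "wait", or out of fuel

    initial = (0, 1)
    actual = run(world, initial, ["wait", "move", "wait"])   # (1, 0)

    # "Had the agent moved at every step, it would still have reached
    # only position 1" -- true relative to this theory:
    print(run(world, initial, ["move"] * 3))                 # (1, 0)

    # Relative to a different theory of the same episode (no fuel
    # constraint), the same counterfactual comes out false:
    def free_world(state, action):
        pos, fuel = state
        return (pos + 1, fuel - 1) if action == "move" else (pos, fuel)

    print(run(free_world, initial, ["move"] * 3))            # (3, -2)

The counterfactual's truth value is a property of the chosen model,
not of the actual history, which both theories here reproduce.
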
5. Explanations are offered for
      a. "I can, but I won't" (a sketch of the automaton reading
         of this item and the next follows the list).
      b. free will in deterministic systems.
      c. causality.
      d. counterfactual sentences.
      e. metaphysically and epistemologically adequate theories.
      f. seeing the dog, imagining the dog, hallucinating the dog,
         and seeing a dog sense-datum.
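
As a hedged sketch of items (a) and (b), in the spirit of the
automaton account (the environment, the goal, the horizon, and the
agent's program below are all invented for illustration): "the
agent can reach the goal" means that some output sequence open to
the agent subautomaton achieves it, while "but it won't" records
that the agent's actual deterministic program emits a different
sequence.

    from itertools import product

    # Invented deterministic environment: the state is a position.
    def env_step(state, action):
        return state + 1 if action == "right" else state - 1

    GOAL = 2                    # illustrative goal: reach position 2
    ACTIONS = ["right", "left"]
    HORIZON = 2                 # illustrative number of steps

    def reaches_goal(plan, state=0):
        for a in plan:
            state = env_step(state, a)
        return state == GOAL

    # "The agent CAN reach the goal": some action sequence open to
    # the agent subautomaton achieves it, the rest held fixed.
    can = any(reaches_goal(p) for p in product(ACTIONS, repeat=HORIZON))

    # "...but it WON'T": the agent's actual deterministic program
    # happens to emit a different sequence.
    def actual_program(step):
        return "left"           # this agent always moves left

    wont = not reaches_goal([actual_program(t) for t in range(HORIZON)])

    print(can, wont)            # True True: "I can, but I won't"

On this reading there is no tension between determinism and
ability: "can" quantifies over the outputs available to the
subautomaton, while the actual program fixes which one occurs.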